Search Results
Search for: All records
Total Resources: 5
Filter by Author / Creator:
- Strong, Matthew (4)
- Lou, Yingli (2)
- Upadhyaya, Satish (2)
- Wang, Gang (2)
- Ye, Yunyang (2)
- Zuo, Wangda (2)
- Banahene, Kwabena Oppong (1)
- Camps, Gadiel Sznaier (1)
- Chen, Degang (1)
- Cohen, Morris (1)
- Do, Won Kyung (1)
- Gadogbe, Bryce (1)
- Geiger, Randall L. (1)
- Kennedy, Monroe (1)
- Payne, Chris (1)
- Schwager, Mac (1)
- Strong, Matthew R. (1)
- Swann, Aiden (1)
- Yang, Yizhi (1)
- Swann, Aiden; Strong, Matthew; Do, Won Kyung; Camps, Gadiel Sznaier; Schwager, Mac; Kennedy, Monroe (IEEE). In this work, we propose a novel method to supervise 3D Gaussian Splatting (3DGS) scenes using optical tactile sensors. Optical tactile sensors have become widespread in robotics for manipulation and object representation; however, raw optical tactile sensor data is unsuitable for directly supervising a 3DGS scene. Our representation leverages a Gaussian Process Implicit Surface to implicitly represent the object, combining many touches into a unified representation with uncertainty. We merge this model with a monocular depth estimation network, which is aligned in a two-stage process: coarsely aligning with a depth camera, then finely adjusting to match our touch data. For every training image, our method produces a corresponding fused depth and uncertainty map. Using this additional information, we propose a new loss function, a variance-weighted depth supervised loss, for training the 3DGS scene model. We leverage the DenseTact optical tactile sensor and RealSense RGB-D camera to show that combining touch and vision in this manner leads to quantitatively and qualitatively better results than vision or touch alone in few-view scene synthesis, on opaque as well as reflective and transparent objects. Please see our project page at armlabstanford.github.io/touch-gs
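The variance-weighted depth supervised loss described in this abstract can be pictured as an inverse-variance weighting of a per-pixel depth error, so that pixels where the fused touch-and-vision depth estimate is uncertain contribute less to training. Below is a minimal, hypothetical PyTorch sketch of such a term, not the authors' implementation; the function name, the `eps` stabilizer, and the weight normalization are assumptions.

```python
import torch

def variance_weighted_depth_loss(rendered_depth: torch.Tensor,
                                 fused_depth: torch.Tensor,
                                 fused_variance: torch.Tensor,
                                 eps: float = 1e-6) -> torch.Tensor:
    """Weight per-pixel squared depth error by inverse variance.

    rendered_depth: depth rendered from the 3DGS scene, shape (H, W)
    fused_depth:    depth fused from touch (GPIS) and monocular estimates
    fused_variance: per-pixel variance of the fused depth estimate
    """
    # Inverse-variance weights: confident pixels dominate the loss.
    weights = 1.0 / (fused_variance + eps)
    # Normalize so the weighting rescales, rather than inflates, the loss.
    weights = weights / weights.mean()
    return (weights * (rendered_depth - fused_depth) ** 2).mean()

# Hypothetical usage inside a 3DGS training step:
# loss = photometric_loss + lambda_depth * variance_weighted_depth_loss(
#     rendered_depth, fused_depth, fused_variance)
```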
- Lou, Yingli; Ye, Yunyang; Yang, Yizhi; Zuo, Wangda; Wang, Gang; Strong, Matthew; Upadhyaya, Satish; Payne, Chris (Science and Technology for the Built Environment)
- Banahene, Kwabena Oppong; Strong, Matthew R.; Gadogbe, Bryce; Chen, Degang; Geiger, Randall L. (IEEE International Symposium on Circuits and Systems)
- Ye, Yunyang; Lou, Yingli; Strong, Matthew; Upadhyaya, Satish; Zuo, Wangda; Wang, Gang (Science and Technology for the Built Environment)